
    Quantitative Value Strategy Beats the Market

    The objective of this paper is to test whether a quantitative value strategy works in the Canadian market. Combining value investment strategies from researchers and industry pioneers, a slightly revised existing strategy is used to back-test historical returns in the Canadian stock market. All the value strategies commonly used by others are small deviations from the three main strategies discussed in the literature review. In order to identify stocks with greater intrinsic value trading at cheap prices, numerous ratios are used in the screening process. There are three main steps for identifying the desired stocks. First, eliminate all stocks with a high likelihood of earnings manipulation. Next, identify cheap stocks using various financial ratios. Lastly, find the best-quality stocks. Rankings are based on prior year-end financial data collected from Bloomberg. A long-short strategy is applied, with one hundred twenty percent in long positions and twenty percent in short positions. Based on prior year-end market capitalization, weights are assigned to each selected stock. Once the portfolio is formed, these securities follow a buy-and-hold strategy until the next rebalance, at the following year end. After back-testing ten years of data from 2002 to 2012, the 120/20 portfolio's annualized return is 11.53%, which is 4.92% higher than the annual S&P/TSX index return. Looking at returns alone, the constructed portfolio does outperform; however, the excess return is not risk-adjusted, as indicated by the regression test. The value strategy is able to beat the market, but the excess returns mainly come from the excess risks the strategy has taken. After risk adjustment, the constructed portfolio no longer beats the market; in other words, the famous value strategy fails to beat the market once risk factors are taken into consideration.
    Moreover, there are several potential implementation issues that could erode performance, such as transaction costs, dividend reinvestment, and look-ahead bias. Some behavioral factors are also observed that prevent the strategy from beating the benchmark returns.
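    The weighting step described above (market-cap-proportional weights, 120% long and 20% short) can be sketched as follows; the function name and sample tickers are illustrative, not taken from the paper:

    ```python
    def portfolio_weights(long_caps, short_caps, gross_long=1.2, gross_short=0.2):
        """Assign market-cap-proportional weights for a 120/20 long-short portfolio.

        long_caps / short_caps: dicts of ticker -> prior year-end market cap.
        Long weights sum to +1.2, short weights to -0.2, so net exposure is 1.0.
        """
        total_long = sum(long_caps.values())
        total_short = sum(short_caps.values())
        weights = {t: gross_long * c / total_long for t, c in long_caps.items()}
        weights.update({t: -gross_short * c / total_short for t, c in short_caps.items()})
        return weights

    w = portfolio_weights({"A": 60.0, "B": 40.0}, {"C": 50.0, "D": 50.0})
    # longs: A=0.72, B=0.48; shorts: C=-0.10, D=-0.10; net exposure = 1.0
    ```

    Under the paper's setup these weights would be held unchanged until the next year-end rebalance.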

    TBC-YOLOv7: a refined YOLOv7-based algorithm for tea bud grading detection

    Introduction: Accurate grading identification of tea buds is a prerequisite for automated tea picking based on a machine vision system. However, current target detection algorithms face challenges in detecting tea bud grades in complex backgrounds. In this paper, an improved YOLOv7 tea bud grading detection algorithm, TBC-YOLOv7, is proposed.
    Methods: The TBC-YOLOv7 algorithm incorporates the transformer architecture design from the natural language processing field, integrating a transformer module based on the contextual information in the feature map into the YOLOv7 algorithm, thereby facilitating self-attention learning and enhancing the connection of global feature information. To fuse feature information at different scales, the TBC-YOLOv7 algorithm employs a bidirectional feature pyramid network. In addition, coordinate attention is embedded at critical positions in the network to suppress useless background details while paying more attention to the prominent features of tea buds. The SIoU loss function is applied as the bounding-box loss function to improve the convergence speed of the network.
    Results: The experiments indicate that TBC-YOLOv7 is effective on all grades of samples in the test set. Specifically, the model achieves precisions of 88.2% and 86.9%, with corresponding recalls of 81% and 75.9%. The mean average precision of the model reaches 87.5%, 3.4% higher than the original YOLOv7, with average precision values of up to 90% for one bud with one leaf. Furthermore, the F1 score reaches 0.83, and the model uses fewer parameters than the YOLOv7 model. Finally, the model's detections correlate strongly with the manual annotation results (R² = 0.89), with a root mean square error of 1.54.
    Discussion: The TBC-YOLOv7 model proposed in this paper exhibits superior performance in visual recognition, indicating that the improved YOLOv7 model fused with a transformer-style module can achieve higher grading accuracy on densely growing tea buds. This enables grade detection of tea buds in practical scenarios, providing a solution and technical support for the automated collection and grading of tea buds.
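    Bounding-box losses such as SIoU build on the plain intersection-over-union term between a predicted and a ground-truth box. A minimal, dependency-free IoU computation (illustrative only, not the paper's implementation) is:

    ```python
    def iou(box_a, box_b):
        """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
        # Corners of the intersection rectangle.
        x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter)

    score = iou((0, 0, 2, 2), (1, 1, 3, 3))  # overlap 1, union 7 -> 1/7
    ```

    SIoU extends this term with angle, distance, and shape penalties to speed up convergence, which is the property the paper exploits.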

    A robust modulation classification method using convolutional neural networks

    Automatic modulation classification (AMC) is a core technique in noncooperative communication systems. In particular, feature-based (FB) AMC algorithms have been widely studied. Current FB AMC methods are commonly designed for a limited set of modulation types and lack generalization ability; to tackle this challenge, a robust AMC method using convolutional neural networks (CNN) is proposed in this paper. In total, 15 different modulation types are considered. The proposed method can classify the received signal directly without manual feature extraction, as it automatically learns features from the received signals. The features learned by the CNN are presented and analyzed. The robust features of the received signals in a specific SNR range are studied. The classification accuracy using the CNN is shown to be remarkable, particularly at low SNRs. The generalization ability of the robust features is also shown to be excellent using a support vector machine (SVM). Finally, to help better understand the process of feature learning, some outputs of the intermediate layers of the CNN are visualized.
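    As a toy illustration of the kind of raw input such a network consumes, the sketch below generates a noisy BPSK baseband signal at a chosen SNR. The function name and parameters are illustrative; the paper's 15 modulation types and dataset are not reproduced here.

    ```python
    import math
    import random

    def bpsk_with_noise(bits, snr_db, samples_per_symbol=4, seed=0):
        """Map bits to +/-1 BPSK symbols, upsample, and add white Gaussian
        noise at the requested SNR (signal power is 1 for +/-1 symbols)."""
        rng = random.Random(seed)
        noise_power = 10 ** (-snr_db / 10)   # SNR = P_signal / P_noise, P_signal = 1
        sigma = math.sqrt(noise_power)
        signal = []
        for b in bits:
            symbol = 1.0 if b else -1.0
            for _ in range(samples_per_symbol):
                signal.append(symbol + rng.gauss(0.0, sigma))
        return signal

    x = bpsk_with_noise([1, 0, 1, 1], snr_db=10)  # 4 bits * 4 samples = 16 samples
    ```

    A CNN-based classifier of the kind described would take such raw sample vectors directly, rather than hand-crafted features.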

    Visual Dexterity: In-hand Dexterous Manipulation from Depth

    In-hand object reorientation is necessary for performing many dexterous manipulation tasks, such as tool use in unstructured environments that remain beyond the reach of current robots. Prior works built reorientation systems that assume one or more of the following specific circumstances: reorienting only specific objects with simple shapes, a limited range of reorientation, slow or quasistatic manipulation, the need for specialized and costly sensor suites, simulation-only results, and other constraints that make the system infeasible for real-world deployment. We overcome these limitations and present a general object reorientation controller that is trained using reinforcement learning in simulation and evaluated in the real world. Our system uses readings from a single commodity depth camera to dynamically reorient complex objects by any amount in real time. The controller generalizes to novel objects not used during training. It is successful in the most challenging test: the ability to reorient objects in the air held by a downward-facing hand that must counteract gravity during reorientation. The results demonstrate that policy transfer from simulation to the real world can be accomplished even for dynamic and contact-rich tasks. Lastly, our hardware uses only open-source components that cost less than five thousand dollars. Such construction makes it possible to replicate the work and democratize future research in dexterous manipulation. Videos are available at: https://taochenshh.github.io/projects/visual-dexterity

    You Are What You Annotate: Towards Better Models through Annotator Representations

    Annotator disagreement is ubiquitous in natural language processing (NLP) tasks. There are multiple reasons for such disagreements, including the subjectivity of the task, difficult cases, unclear guidelines, and so on. Rather than simply aggregating labels to obtain data annotations, we instead try to directly model the diverse perspectives of the annotators, and explicitly account for annotators' idiosyncrasies in the modeling process by creating representations for each annotator (annotator embeddings) and also for their annotations (annotation embeddings). In addition, we propose TID-8, The Inherent Disagreement - 8 dataset, a benchmark that consists of eight existing language understanding datasets that have inherent annotator disagreement. We test our approach on TID-8 and show that it helps models learn significantly better from disagreements on six different datasets in TID-8 while increasing model size by less than 1% in parameters. By capturing the unique tendencies and subjectivity of individual annotators through embeddings, our representations prime AI models to be inclusive of diverse viewpoints.
    Comment: Accepted to Findings of EMNLP 202
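    The core idea of an annotator embedding can be sketched as a small lookup table whose vectors are combined with the task input, so the model conditions its prediction on who produced the label. This is a minimal illustration under the assumption of simple additive fusion; the paper's actual architecture and fusion method are not reproduced here, and all names are hypothetical.

    ```python
    import random

    class AnnotatorEmbeddings:
        """Toy lookup table: each annotator gets a small vector (trainable in a
        real model) that is combined with task features before classification."""
        def __init__(self, annotator_ids, dim=4, seed=0):
            rng = random.Random(seed)
            self.table = {a: [rng.uniform(-0.1, 0.1) for _ in range(dim)]
                          for a in annotator_ids}

        def combine(self, features, annotator_id):
            # Additive fusion: shift the feature vector by the annotator's vector.
            emb = self.table[annotator_id]
            return [f + e for f, e in zip(features, emb)]

    embs = AnnotatorEmbeddings(["ann1", "ann2"], dim=4)
    x = embs.combine([0.5, 0.5, 0.5, 0.5], "ann1")
    ```

    Because each annotator's vector is distinct, the same input yields annotator-specific representations, which is how disagreement can be modeled instead of averaged away.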

    Spatio-Temporal AU Relational Graph Representation Learning For Facial Action Units Detection

    This paper presents our Facial Action Units (AUs) recognition submission to the fifth Affective Behavior Analysis in-the-wild Competition (ABAW). Our approach consists of three main modules: (i) a pre-trained facial representation encoder which produces a strong facial representation from each face image in the input sequence; (ii) an AU-specific feature generator that learns a set of AU features from each facial representation; and (iii) a spatio-temporal graph learning module that constructs a spatio-temporal graph representation. This graph representation describes the AUs contained in all frames and predicts the occurrence of each AU based on both the modeled spatial information within the corresponding face and the learned temporal dynamics among frames. The experimental results show that our approach outperforms the baseline, and that the spatio-temporal graph representation learning allows our model to achieve the best results among all ablated systems. Our model ranks 4th in the AU recognition track of the 5th ABAW Competition.

    Automatic Truss Design with Reinforcement Learning

    Truss layout design, namely finding a lightweight truss layout satisfying all the physical constraints, is a fundamental problem in the building industry. Generating the optimal layout is a challenging combinatorial optimization problem, which can be extremely expensive to solve by exhaustive search. Directly applying end-to-end reinforcement learning (RL) methods to truss layout design is also infeasible, since only a tiny portion of the entire layout space is valid under the physical constraints, leading to particularly sparse rewards for RL training. In this paper, we develop AutoTruss, a two-stage framework to efficiently generate both lightweight and valid truss layouts. AutoTruss first adopts Monte Carlo tree search to discover a diverse collection of valid layouts. Then RL is applied to iteratively refine the valid solutions. We conduct experiments and ablation studies on popular truss layout design test cases in both 2D and 3D settings. AutoTruss outperforms the best-reported layouts by 25.1% in the most challenging 3D test cases, making it the first effective deep-RL-based approach in the truss layout design literature.
    Comment: IJCAI2023. The codes are available at https://github.com/StigLidu/AutoTrus
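    The Monte Carlo tree search used in the first stage typically selects which branch to expand via an upper-confidence rule. The sketch below shows the generic UCB1 score and a selection step; it is an assumed, textbook-style illustration, not AutoTruss's code.

    ```python
    import math

    def ucb1(value_sum, visits, parent_visits, c=1.4):
        """UCB1 score used in MCTS node selection: exploit the average reward,
        explore rarely visited nodes. Unvisited children get infinite priority."""
        if visits == 0:
            return float("inf")
        return value_sum / visits + c * math.sqrt(math.log(parent_visits) / visits)

    def select(children, parent_visits):
        """Pick the index of the (value_sum, visits) pair with the highest UCB1 score."""
        return max(range(len(children)),
                   key=lambda i: ucb1(children[i][0], children[i][1], parent_visits))

    best = select([(3.0, 4), (1.0, 1), (0.0, 0)], parent_visits=5)  # index 2 (unvisited)
    ```

    In a truss setting, the reward backed up through such a tree would reflect layout validity and weight, which is what makes the search focus on the tiny valid region the abstract describes.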

    Adversarial Examples in the Physical World: A Survey

    Deep neural networks (DNNs) have demonstrated high vulnerability to adversarial examples. Beyond attacks in the digital world, the practical implications of adversarial examples in the physical world present significant challenges and safety concerns. However, current research on physical adversarial examples (PAEs) lacks a comprehensive understanding of their unique characteristics, which limits their significance and further progress. In this paper, we address this gap by thoroughly examining the characteristics of PAEs within a practical workflow encompassing training, manufacturing, and re-sampling processes. By analyzing the links between physical adversarial attacks, we identify manufacturing and re-sampling as the primary sources of the distinct attributes and particularities of PAEs. Leveraging this knowledge, we develop a comprehensive analysis and classification framework for PAEs based on their specific characteristics, covering over 100 studies on physical-world adversarial examples. Furthermore, we investigate defense strategies against PAEs and identify open challenges and opportunities for future research. We aim to provide a fresh, thorough, and systematic understanding of PAEs, thereby promoting the development of robust adversarial learning and its application in open-world scenarios.
    Comment: Adversarial examples, physical-world scenarios, attacks and defense
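    The digital-world baseline that physical attacks extend is the gradient-sign perturbation (FGSM). A dependency-free sketch of that single step is shown below; the gradient is supplied externally and the function name is illustrative, since the survey itself covers many attack families rather than one implementation.

    ```python
    def fgsm_perturb(x, grad, epsilon=0.1):
        """Fast Gradient Sign Method step: nudge each input coordinate by
        epsilon in the direction of the loss gradient's sign."""
        sign = lambda g: (g > 0) - (g < 0)   # -1, 0, or +1
        return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

    x_adv = fgsm_perturb([0.5, 0.2, -0.3], [0.9, -0.4, 0.0], epsilon=0.1)
    # third coordinate is unchanged because its gradient is zero
    ```

    Physical adversarial examples must additionally survive the manufacturing and re-sampling steps the survey identifies, which is why they cannot rely on such pixel-precise perturbations alone.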